More than a Million Ways to Be Pushed: A High-Fidelity Experimental Dataset of Planar Pushing
Pushing is a motion primitive useful to handle objects that are too large,
too heavy, or too cluttered to be grasped. It is at the core of much of robotic
manipulation, in particular when physical interaction is involved. It seems
reasonable then to wish for robots to understand how pushed objects move.
In reality, however, robots often rely on approximations which yield models
that are computable, but also restricted and inaccurate. Just how close are
those models? How reasonable are the assumptions they are based on? To help
answer these questions, and to get a better experimental understanding of
pushing, we present a comprehensive and high-fidelity dataset of planar pushing
experiments. The dataset contains timestamped poses of a circular pusher and a
pushed object, as well as forces at the interaction. We vary the push
interaction in 6 dimensions: surface material, shape of the pushed object,
contact position, pushing direction, pushing speed, and pushing acceleration.
An industrial robot automates the data capturing along precisely controlled
position-velocity-acceleration trajectories of the pusher, which give dense
samples of positions and forces of uniform quality.
We finish the paper by characterizing the variability of friction, and
evaluating the most common assumptions and simplifications made by models of
frictional pushing in robotics. Comment: 8 pages, 10 figures
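As a rough illustration of the dataset's structure, the six varied experimental dimensions and a per-sample record of timestamped poses and forces might be organized as follows. The field names and types here are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical record for one timestamped sample in a planar-pushing log.
@dataclass
class PushSample:
    t: float                                # timestamp (s)
    pusher_pose: Tuple[float, float]        # (x, y) position of the circular pusher
    object_pose: Tuple[float, float, float] # (x, y, theta) planar pose of the object
    force: Tuple[float, float]              # (fx, fy) contact force at the interaction

# The six varied dimensions described in the abstract:
PUSH_DIMENSIONS = [
    "surface_material",
    "object_shape",
    "contact_position",
    "pushing_direction",
    "pushing_speed",
    "pushing_acceleration",
]

sample = PushSample(t=0.0, pusher_pose=(0.0, 0.0),
                    object_pose=(0.1, 0.0, 0.0), force=(1.2, 0.0))
```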
An Active Non-Intrusive System Identification Approach for Cardiovascular Health Monitoring
In this study, a novel active non-intrusive system identification paradigm is developed for cardiovascular health monitoring. The proposed approach uses a collocated actuator-sensor unit, devised from the common blood pressure cuff, to simultaneously (1) produce rich transmural blood pressure waves that propagate through the cardiovascular system and (2) measure these rich peripheral transmural blood pressures via the pressure oscillations produced within the cuff's bladder, in order to reproduce the central aortic blood pressure accurately. To this end, a mathematical model of the cardiovascular system is developed that captures the wave propagation dynamics of the external (excitation applied by the cuff) and internal (excitation produced by the heart) blood pressure waveforms. Next, a system identification protocol is developed in which rich transmural blood pressures are recorded and used to identify the parameters characterizing the model. The peripheral blood pressures are then used in tandem with the characterized model to reconstruct the central aortic blood pressure waveform. The results of this study indicate that the developed protocol can reliably and accurately reproduce the central aortic blood pressure, and that it outperforms its intrusive passive counterpart (the Individualized Transfer Function methodology). The root-mean-square waveform reproduction error, pulse pressure error, and systolic pressure error were 3.31 mmHg, 1.36 mmHg, and 0.06 mmHg, respectively, for the active non-intrusive methodology, versus 4.12 mmHg, 1.59 mmHg, and 2.67 mmHg for the passive intrusive counterpart, indicating the superiority of the proposed approach.
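The error metrics reported above are standard waveform comparisons. A minimal sketch of how they are typically computed, assuming two equally sampled pressure waveforms in mmHg (the toy values below are illustrative, not from the study):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two equally sampled waveforms (mmHg)."""
    assert len(estimated) == len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(reference))

def pulse_pressure(waveform):
    """Pulse pressure = systolic (max) minus diastolic (min) pressure."""
    return max(waveform) - min(waveform)

# Toy example: a reference aortic waveform and a reconstruction of it.
ref = [80, 95, 120, 110, 90]
est = [81, 94, 121, 109, 91]
err = rmse(est, ref)
pp_err = abs(pulse_pressure(est) - pulse_pressure(ref))
```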
Learning the Dynamics of Compliant Tool-Environment Interaction for Visuo-Tactile Contact Servoing
Many manipulation tasks require the robot to control the contact between a
grasped compliant tool and the environment, e.g. scraping a frying pan with a
spatula. However, modeling tool-environment interaction is difficult,
especially when the tool is compliant, and the robot cannot be expected to have
the full geometry and physical properties (e.g., mass, stiffness, and friction)
of all the tools it must use. We propose a framework that learns to predict the
effects of a robot's actions on the contact between the tool and the
environment given visuo-tactile perception. Key to our framework is a novel
contact feature representation that consists of a binary contact value, the
line of contact, and an end-effector wrench. We propose a method to learn the
dynamics of these contact features from real world data that does not require
predicting the geometry of the compliant tool. We then propose a controller
that uses this dynamics model for visuo-tactile contact servoing and show that
it is effective at performing scraping tasks with a spatula, even in scenarios
where precise contact needs to be made to avoid obstacles. Comment: 6th
Conference on Robot Learning (CoRL 2022), Auckland, New Zealand. 8 pages +
references + appendix
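The abstract names the three components of the contact feature representation: a binary contact value, a line of contact, and an end-effector wrench. One way this could be encoded is sketched below; the concrete parameterization (two 3-D endpoints for the line, a 6-D wrench) is an assumption, not necessarily the paper's.

```python
from dataclasses import dataclass
from typing import Tuple

Point3 = Tuple[float, float, float]

@dataclass
class ContactFeature:
    in_contact: bool                    # binary contact value
    contact_line: Tuple[Point3, Point3] # line of contact as two 3-D endpoints
    wrench: Tuple[float, float, float,
                  float, float, float]  # end-effector wrench (fx, fy, fz, tx, ty, tz)

# Example: a spatula edge contacting a pan along a short line segment.
feat = ContactFeature(
    in_contact=True,
    contact_line=((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)),
    wrench=(0.0, 0.0, -2.0, 0.0, 0.05, 0.0),
)
```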
VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable Objects
Deformable object manipulation can benefit from representations that
seamlessly integrate vision and touch while handling occlusions. In this work,
we present a novel approach for, and real-world demonstration of, multimodal
visuo-tactile state-estimation and dynamics prediction for deformable objects.
Our approach, VIRDO++, builds on recent progress in multimodal neural implicit
representations for deformable object state-estimation [1] via a new
formulation for deformation dynamics and a complementary state-estimation
algorithm that (i) maintains a belief over deformations, and (ii) enables
practical real-world application by removing the need for privileged contact
information. In the context of two real-world robotic tasks, we show: (i)
high-fidelity cross-modal state-estimation and prediction of deformable objects
from partial visuo-tactile feedback, and (ii) generalization to unseen objects
and contact formations.
Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials. Comment: ICRA 2019; Project page: http://sain.csail.mit.ed
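The hybrid idea (an analytical physics step corrected by a learned residual) can be sketched as follows. The toy point-mass integrator and the hard-coded correction are stand-ins for the paper's physics engine and object-based interaction network, not its actual models.

```python
def physics_step(state, action, dt=0.01):
    """Toy point-mass integrator standing in for a full physics engine."""
    x, v = state
    v = v + action * dt
    x = x + v * dt
    return (x, v)

def learned_correction(state, action):
    """Placeholder for a neural network's residual prediction,
    e.g. unmodeled drag proportional to velocity."""
    return (0.0, -0.001 * state[1])

def hybrid_step(state, action):
    """Hybrid dynamics: analytical prediction plus learned residual."""
    px, pv = physics_step(state, action)
    cx, cv = learned_correction(state, action)
    return (px + cx, pv + cv)

next_state = hybrid_step((0.0, 1.0), action=0.0)
```

In the real system the residual network is trained on data so that the hybrid prediction matches observed trajectories more closely than the physics engine alone.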
Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
This paper presents a robotic pick-and-place system that is capable of
grasping and recognizing both known and novel objects in cluttered
environments. The key new feature of the system is that it handles a wide range
of object categories without needing any task-specific training data for novel
objects. To achieve this, it first uses a category-agnostic affordance
prediction algorithm to select and execute among four different grasping
primitive behaviors. It then recognizes picked objects with a cross-domain
image classification framework that matches observed images to product images.
Since product images are readily available for a wide range of objects (e.g.,
from the web), the system works out-of-the-box for novel objects without
requiring any additional training data. Exhaustive experimental results
demonstrate that our multi-affordance grasping achieves high success rates for
a wide variety of objects in clutter, and our recognition algorithm achieves
high accuracy for both known and novel grasped objects. The approach was part
of the MIT-Princeton Team system that took 1st place in the stowing task at the
2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are
available online at http://arc.cs.princeton.edu Comment: Project webpage: http://arc.cs.princeton.edu Summary video:
https://youtu.be/6fG7zwGfIk
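At a high level, recognizing a picked object by matching its observed image to product images reduces to nearest-neighbor search in an embedding space. The sketch below assumes embeddings already exist (the paper learns them with a cross-domain image classification framework); the toy 2-D vectors and product names are illustrative only.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def match_product(observed_emb, product_embs):
    """Return the product id whose embedding best matches the observation."""
    return max(product_embs,
               key=lambda pid: cosine_similarity(observed_emb, product_embs[pid]))

# Toy embeddings for two known product images.
products = {"box": (1.0, 0.0), "bottle": (0.0, 1.0)}
best = match_product((0.9, 0.1), products)
```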